Search for: All records

Creators/Authors contains: "Ororbia II, Alexander G."

  1. This paper improves on several aspects of a sieve-based event ordering architecture, CAEVO (Chambers et al., 2014), which creates globally consistent temporal relations between events and time expressions. First, we examine the use of word embeddings and semantic role features. With the incorporation of these new features, we demonstrate a 5% relative F1 gain over our replicated version of CAEVO. Second, we reformulate the architecture's sieve-based inference algorithm as a prediction reranking method that approximately optimizes a scoring function computed using classifier precisions. Within this prediction reranking framework, we propose an alternative scoring function, showing an 8.8% relative gain over the original CAEVO. We further include an in-depth analysis of one of the main datasets used to evaluate temporal classifiers, and we show that in spite of the density of this corpus, there is still a danger of overfitting. While this paper focuses on temporal ordering, its results are applicable to other areas that use sieve-based architectures. (A hedged sketch of the precision-weighted reranking idea appears after this list.)
  2. We present a novel fine-tuning algorithm in a deep hybrid architecture for semi-supervised text classification. During each increment of the online learning process, the fine-tuning algorithm serves as a top-down mechanism for pseudo-jointly modifying model parameters following a bottom-up generative learning pass. The resulting model, trained under what we call the Bottom-Up-Top-Down learning algorithm, is shown to outperform a variety of competitive models and baselines trained across a wide range of splits between supervised and unsupervised training data. (A hedged sketch of the bottom-up/top-down update is given after this list.)
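To make the reranking reformulation in item 1 concrete, here is a minimal sketch of precision-weighted sieve reranking in Python. The `Sieve` class, the `rerank` function, the toy classifiers, and the precision values are all illustrative assumptions; they are not taken from the CAEVO codebase or the paper's actual scoring function.

```python
# Hypothetical sketch of precision-weighted sieve reranking; names and
# precision values are illustrative, not from CAEVO itself.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Sieve:
    name: str
    precision: float  # classifier precision, estimated on held-out data
    classify: Callable[[str, str], Optional[str]]  # relation label or None

def rerank(pair, sieves):
    """Score each candidate relation by the summed precision of the sieves
    voting for it, then return the highest-scoring label (or None)."""
    scores = {}
    for sieve in sieves:
        label = sieve.classify(*pair)
        if label is not None:
            scores[label] = scores.get(label, 0.0) + sieve.precision
    return max(scores, key=scores.get) if scores else None

# Toy sieves: a high-precision rule and a low-precision fallback.
sieves = [
    Sieve("rule", 0.90, lambda e1, e2: "BEFORE" if e1 < e2 else None),
    Sieve("fallback", 0.60, lambda e1, e2: "AFTER"),
]
print(rerank(("arrived", "left"), sieves))  # BEFORE: 0.90 outweighs 0.60
```

The contrast with a classic sieve cascade is that no single sieve's answer is final: every classifier votes, and votes are weighted by estimated precision, which is one way to read the abstract's "scoring function computed using classifier precisions."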
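Similarly, for item 2, the sketch below illustrates one plausible reading of the Bottom-Up-Top-Down idea: each online increment runs a bottom-up generative (reconstruction) pass, and a top-down supervised pass adjusts the shared parameters whenever a label is available. The layer sizes, loss combination, and optimizer settings are assumptions for illustration, not the paper's actual architecture or hyperparameters.

```python
# Hypothetical sketch of an alternating bottom-up / top-down update for
# semi-supervised text classification; all sizes and losses are assumed.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(300, 64), nn.Tanh())  # bottom-up path
decoder = nn.Linear(64, 300)                            # generative head
classifier = nn.Linear(64, 2)                           # top-down head
opt = torch.optim.Adam([*encoder.parameters(),
                        *decoder.parameters(),
                        *classifier.parameters()], lr=1e-3)

def online_step(x, y=None):
    """One increment: a bottom-up generative (reconstruction) pass,
    plus a top-down supervised pass when a label is available."""
    h = encoder(x)
    loss = nn.functional.mse_loss(decoder(h), x)  # bottom-up generative loss
    if y is not None:                             # top-down fine-tuning
        loss = loss + nn.functional.cross_entropy(classifier(h), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

x = torch.randn(8, 300)                      # a mini-batch of text vectors
online_step(x)                               # unlabeled increment
online_step(x, torch.randint(0, 2, (8,)))    # labeled increment
```

Because both passes update the same encoder parameters, the supervised top-down signal and the unsupervised bottom-up signal are blended at every increment, which is the "pseudo-joint" flavor the abstract describes.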